62 research outputs found

    ADMM-Based Hyperspectral Unmixing Networks for Abundance and Endmember Estimation

    Hyperspectral image (HSI) unmixing is an increasingly studied problem in various areas, including remote sensing. It has been tackled using both physical model-based approaches and, more recently, machine learning-based ones. In this article, we propose a new HSI unmixing algorithm that combines model- and learning-based techniques through algorithm unrolling, delivering improved unmixing performance. Our approach unrolls the alternating direction method of multipliers (ADMM) solver of a constrained sparse regression problem underlying a linear mixture model. We then propose a neural network structure for abundance estimation that can be trained with supervised learning based on a new composite loss function, as well as another network structure for blind unmixing that can be trained with unsupervised learning. The proposed networks are also shown to possess a lighter yet richer structure, with fewer learnable parameters and more skip connections than competing architectures. Extensive experiments show that the proposed methods achieve much faster convergence and better performance, even with a very small training set, than other unmixing methods such as the model-inspired neural network for abundance estimation (MNN-AE), the model-inspired neural network for blind unmixing (MNN-BU), unmixing using deep image prior (UnDIP), and the endmember-guided unmixing network (EGU-Net).
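    As a rough illustration of the kind of iteration being unrolled, the following sketch (with a placeholder endmember matrix, regularization weight and step size; the paper's networks replace such fixed iterations with learned layers) runs plain ADMM on the sparsity-regularized, nonnegativity-constrained regression behind the linear mixture model.

        import numpy as np

        def admm_unmix(y, E, lam=0.01, rho=1.0, n_iter=20):
            """Plain ADMM for min_a 0.5*||y - E a||^2 + lam*||a||_1  s.t.  a >= 0.

            y: (L,) observed pixel spectrum; E: (L, R) endmember matrix.
            Each pass of the loop corresponds to one 'layer' of an unrolled network,
            where quantities such as rho and the shrinkage threshold would be learned.
            """
            L, R = E.shape
            a, z, u = np.zeros(R), np.zeros(R), np.zeros(R)
            Q = np.linalg.inv(E.T @ E + rho * np.eye(R))
            for _ in range(n_iter):
                a = Q @ (E.T @ y + rho * (z - u))            # least-squares update
                z = np.maximum(a + u - lam / rho, 0.0)       # shrinkage + nonnegativity
                u = u + a - z                                # dual update
            return z

        # toy usage: 3 endmembers, 50 spectral bands
        rng = np.random.default_rng(0)
        E = np.abs(rng.normal(size=(50, 3)))
        y = E @ np.array([0.6, 0.3, 0.1]) + 0.01 * rng.normal(size=50)
        print(admm_unmix(y, E))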

    Compressed Sensing With Prior Information: Strategies, Geometry, and Bounds

    We address the problem of compressed sensing (CS) with prior information: reconstruct a target CS signal with the aid of a similar signal that is known beforehand, our prior information. We integrate the additional knowledge of the similar signal into CS via ℓ1-ℓ1 and ℓ1-ℓ2 minimization. We then establish bounds on the number of measurements required by these problems to successfully reconstruct the original signal. Our bounds and geometrical interpretations reveal that if the prior information has good enough quality, ℓ1-ℓ1 minimization improves the performance of CS dramatically. In contrast, ℓ1-ℓ2 minimization has performance very similar to classical CS and brings no significant benefits. In addition, we use the insight provided by our bounds to design practical schemes to improve the prior information. All our findings are illustrated with experimental results.
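    A minimal sketch of ℓ1-ℓ1 minimization with prior information, using CVXPY purely for illustration (the problem sizes, sparsity level and the weight beta on the prior term are assumptions, not values from the paper): reconstruct x from measurements y = Ax while staying close, in the ℓ1 sense, to the known similar signal w.

        import cvxpy as cp
        import numpy as np

        rng = np.random.default_rng(1)
        n, m = 200, 60                                   # signal length, number of measurements
        x_true = np.zeros(n)
        x_true[rng.choice(n, 8, replace=False)] = rng.normal(size=8)
        w = x_true + 0.05 * rng.normal(size=n)           # prior information: a similar signal
        A = rng.normal(size=(m, n)) / np.sqrt(m)
        y = A @ x_true

        beta = 1.0                                       # weight on the prior term (assumed)
        x = cp.Variable(n)
        objective = cp.Minimize(cp.norm1(x) + beta * cp.norm1(x - w))
        problem = cp.Problem(objective, [A @ x == y])
        problem.solve()
        print("reconstruction error:", np.linalg.norm(x.value - x_true))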

    Task-Based Quantization for Massive MIMO Channel Estimation

    Massive multiple-input multiple-output (MIMO) systems are the focus of increasing research attention. In such setups, there is an urgent need to utilize simple low-resolution quantizers, due to power and memory constraints. In this work, we study massive MIMO channel estimation with quantized measurements, where the quantization system is designed to minimize the channel estimation error rather than the quantization distortion. We first consider vector quantization and characterize the minimal achievable error. Next, we focus on practical systems utilizing scalar uniform quantizers, and design the analog and digital processing as well as the quantization dynamic range to optimize the channel estimation accuracy. Our results demonstrate that the resulting massive MIMO system, which utilizes low-resolution scalar quantizers, can approach the minimal estimation error dictated by rate-distortion theory and achievable with vector quantizers.
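    To make the pipeline concrete, the toy sketch below (all dimensions, the pilot model and the mid-rise uniform quantizer are illustrative assumptions) estimates a channel vector from scalar-quantized pilot observations with a simple linear digital stage.

        import numpy as np

        def uniform_quantizer(x, n_bits, dyn_range):
            """Element-wise uniform (mid-rise) quantizer with clipping to [-dyn_range, dyn_range]."""
            step = 2 * dyn_range / 2 ** n_bits
            x = np.clip(x, -dyn_range, dyn_range - 1e-12)
            return (np.floor(x / step) + 0.5) * step

        rng = np.random.default_rng(2)
        n_rx, n_pilot = 16, 32
        h = rng.normal(size=n_rx)                          # channel to estimate
        P = rng.choice([-1.0, 1.0], size=(n_pilot, n_rx))  # pilot matrix
        y = P @ h + 0.1 * rng.normal(size=n_pilot)         # analog observations

        y_q = uniform_quantizer(y, n_bits=3, dyn_range=3 * np.std(y))  # low-resolution ADC

        # simple linear (least-squares) digital processing of the quantized samples
        h_hat = np.linalg.lstsq(P, y_q, rcond=None)[0]
        print("channel estimation MSE:", np.mean((h - h_hat) ** 2))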

    Deep learning model-aware regularization with applications to Inverse Problems

    There are various inverse problems, including reconstruction problems arising in medical imaging, where one is aware of the forward operator that maps the variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. In this paper, we provide one such way via an analysis of the generalisation error of deep learning approaches to inverse problems. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients of the learning problem, such as the complexity of the data space, the size of the training set, the Jacobian of the deep neural network, and the Jacobian of the composition of the forward operator with the neural network. We then propose a ‘plug-and-play’ regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also introduce a new method that tightly upper bounds the Jacobians of the relevant operators and is much more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in classical compressed sensing tasks, image super-resolution problems, and accelerated Magnetic Resonance Imaging (MRI) setups.
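    One ingredient of such a bound can be sketched directly: penalizing the (squared) Frobenius norm of the network's Jacobian at the training inputs, estimated here with a single random probe via autograd (the network, data and weight below are PyTorch placeholders, not the paper's actual regulariser).

        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 32))

        def jacobian_penalty(net, x):
            """Single-probe unbiased estimate of ||J(x)||_F^2 averaged over the batch,
            using the identity E_v ||J^T v||^2 = ||J||_F^2 for v ~ N(0, I)."""
            x = x.clone().requires_grad_(True)
            out = net(x)
            v = torch.randn_like(out)
            (grad,) = torch.autograd.grad((out * v).sum(), x, create_graph=True)
            return (grad ** 2).sum(dim=1).mean()

        x, target = torch.randn(8, 32), torch.randn(8, 32)
        loss = nn.functional.mse_loss(net(x), target) + 0.1 * jacobian_penalty(net, x)
        loss.backward()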

    Hardware-Limited Task-Based Quantization

    Quantization plays a critical role in digital signal processing systems. Quantizers are typically designed to obtain an accurate digital representation of the input signal, operating independently of the system task, and are commonly implemented using serial scalar analog-to-digital converters (ADCs). In this work, we study hardware-limited task-based quantization, where a system utilizing a serial scalar ADC is designed to provide a representation suitable for recovering a parameter vector underlying the input signal. We propose hardware-limited task-based quantization systems for a fixed and finite quantization resolution, and characterize their achievable distortion. We then apply the analysis to the practical setups of channel estimation and eigen-spectrum recovery from quantized measurements. Our results illustrate that properly designed hardware-limited systems can approach the optimal performance achievable with vector quantizers, and that, by taking the underlying task into account, the quantization error can be made negligible with a relatively small number of bits.
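    As a hedged toy illustration of the task-ignorant versus task-based distinction (the linear signal model, the analog combining matrix and the bit budget are assumptions made for this sketch, not the paper's system), compare recovering a parameter vector from scalar-quantized samples under a fixed total number of bits.

        import numpy as np

        def quantize(x, n_bits, dyn_range):
            """Element-wise uniform (mid-rise) quantizer, modelling a serial scalar ADC."""
            step = 2 * dyn_range / 2 ** n_bits
            x = np.clip(x, -dyn_range, dyn_range - 1e-12)
            return (np.floor(x / step) + 0.5) * step

        rng = np.random.default_rng(3)
        n_param, n_obs, total_bits = 4, 64, 64
        G = rng.normal(size=(n_obs, n_param))
        s = rng.normal(size=n_param)                   # parameter vector underlying the signal
        x = G @ s + 0.05 * rng.normal(size=n_obs)      # analog observations

        # task-ignorant: spend the budget representing every raw sample (1 bit each),
        # then estimate the parameters digitally
        x_q = quantize(x, total_bits // n_obs, 3 * np.std(x))
        s_ignorant = np.linalg.pinv(G) @ x_q

        # task-based: analog combining first reduces to the task dimension, so the same
        # budget buys 16 bits for each task-relevant quantity
        z = np.linalg.pinv(G) @ x
        s_task = quantize(z, total_bits // n_param, 3 * np.std(z))

        print("task-ignorant MSE:", np.mean((s - s_ignorant) ** 2))
        print("task-based MSE:   ", np.mean((s - s_task) ** 2))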

    Robust Large Margin Deep Neural Networks

    This paper studies the generalization error of deep neural networks via their classification margin. Our approach is based on the Jacobian matrix of a deep neural network and can be applied to networks with arbitrary nonlinearities and pooling layers, and to networks with different architectures, such as feedforward networks and residual networks. Our analysis leads to the conclusion that a bounded spectral norm of the network's Jacobian matrix in the neighbourhood of the training samples is crucial for a deep neural network of arbitrary depth and width to generalize well. This is a significant improvement over the current bounds in the literature, which imply that the generalization error grows with either the width or the depth of the network. Moreover, the analysis shows that the recently proposed batch normalization and weight normalization reparametrizations enjoy good generalization properties, and it leads to a novel network regularizer based on the network's Jacobian matrix. The analysis is supported by experimental results on the MNIST, CIFAR-10, LaRED, and ImageNet datasets.
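    The central quantity, the spectral norm of the network's Jacobian at a given sample, can be estimated without ever forming the Jacobian; the sketch below uses a few autograd-based power iterations on an arbitrary toy classifier (the architecture and iteration count are assumptions).

        import torch
        import torch.nn as nn

        net = nn.Sequential(nn.Linear(784, 128), nn.ReLU(), nn.Linear(128, 10))

        def jacobian_spectral_norm(net, x, n_iter=10):
            """Estimate the largest singular value of the Jacobian of net at x
            by power iteration on J^T J, using only JVP / VJP calls."""
            v = torch.randn_like(x)
            v = v / v.norm()
            for _ in range(n_iter):
                _, jv = torch.autograd.functional.jvp(net, x, v)     # J v
                _, jtjv = torch.autograd.functional.vjp(net, x, jv)  # J^T (J v)
                v = jtjv / (jtjv.norm() + 1e-12)
            _, jv = torch.autograd.functional.jvp(net, x, v)
            return jv.norm()                                         # approx. ||J(x)||_2

        x = torch.randn(1, 784)
        print(float(jacobian_spectral_norm(net, x)))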

    A combined MMSE-ML detection for a spectrally efficient non-orthogonal FDM signal

    In this paper, we investigate the possibility of reliable and computationally efficient detection for a spectrally efficient non-orthogonal Frequency Division Multiplexing (FDM) system exhibiting varying levels of intercarrier interference. Optimum detection is based on the Maximum Likelihood (ML) principle; however, ML detection is impractical due to its computational complexity. On the other hand, linear detection techniques such as Zero Forcing (ZF) and Minimum Mean Square Error (MMSE) exhibit poor performance. Consequently, we explore the combination of MMSE estimation with ML detection over a neighborhood of the MMSE estimate. We evaluate the performance of the different schemes in Additive White Gaussian Noise (AWGN), with reference to the number of FDM carriers and their frequency separation. The combined MMSE-ML scheme achieves near-optimum error performance with polynomial complexity for a small number of BPSK FDM carriers. For QPSK modulation, the performance of the proposed system improves as the number of ML comparisons grows. In all cases, the detectability of the FDM signal is bounded by the signal dimension and the carriers' frequency separation.
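    A hedged toy sketch of the detection idea (the intercarrier-interference matrix, noise level and search radius below are illustrative assumptions): compute the MMSE estimate, slice it to the nearest symbols, and then evaluate the ML metric only over candidates that differ from the sliced estimate within a small neighborhood.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(4)
        n_carriers, snr_db = 6, 10
        H = np.eye(n_carriers) + 0.4 * rng.normal(size=(n_carriers, n_carriers))  # ICI model
        s = rng.choice([-1.0, 1.0], size=n_carriers)                              # BPSK symbols
        sigma2 = 10 ** (-snr_db / 10)
        y = H @ s + np.sqrt(sigma2) * rng.normal(size=n_carriers)

        # 1) MMSE estimate, sliced to the nearest BPSK symbols
        s_mmse = np.linalg.solve(H.T @ H + sigma2 * np.eye(n_carriers), H.T @ y)
        s_hat = np.sign(s_mmse)

        # 2) local ML search: flip up to `radius` symbols around the sliced MMSE estimate
        ml_metric = lambda cand: np.linalg.norm(y - H @ cand) ** 2
        radius, best = 2, s_hat.copy()
        for r in range(1, radius + 1):
            for idx in combinations(range(n_carriers), r):
                cand = s_hat.copy()
                cand[list(idx)] *= -1
                if ml_metric(cand) < ml_metric(best):
                    best = cand

        print("symbol errors, MMSE only:", int(np.sum(s_hat != s)))
        print("symbol errors, MMSE-ML:  ", int(np.sum(best != s)))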

    Multimodal Image Denoising based on Coupled Dictionary Learning

    In this paper, we propose a new multimodal image denoising approach that attenuates additive white Gaussian noise in a given image modality with the aid of a guidance image modality. The proposed coupled image denoising approach consists of two stages: coupled sparse coding and reconstruction. The first stage performs a joint sparse transform of the multimodal images with respect to a group of learned coupled dictionaries, followed by a shrinkage operation on the sparse representations. In the second stage, the shrunken representations, together with the coupled dictionaries, are used to reconstruct the denoised image via an inverse transform. The proposed denoising scheme captures both the common and the distinct features of the different data modalities. This makes our approach more robust to inconsistencies between the guidance and target images, thereby overcoming drawbacks such as texture-copying artifacts. Experiments on real multimodal images demonstrate that the proposed approach exploits the guidance information more effectively, bringing notable benefits in the image denoising task with respect to the state of the art.
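    A rough sketch of the two-stage idea (the dictionaries below are random placeholders rather than learned coupled dictionaries, and the patch size and thresholds are assumptions): jointly sparse-code the noisy target patch and the guidance patch against stacked common/unique dictionaries, shrink the representations, then reconstruct the target from its share of the code.

        import numpy as np

        def soft(x, t):
            return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

        rng = np.random.default_rng(5)
        p, k = 64, 32                                       # patch length (8x8), atoms per block
        Dc, Dx = rng.normal(size=(p, k)), rng.normal(size=(p, k))  # target: common + unique
        Gc, Gu = rng.normal(size=(p, k)), rng.normal(size=(p, k))  # guidance: common + unique

        # stacked coupled dictionary acting on [common; target-unique; guidance-unique] codes
        D = np.block([[Dc, Dx, np.zeros((p, k))],
                      [Gc, np.zeros((p, k)), Gu]])
        D /= np.linalg.norm(D, axis=0, keepdims=True)

        x_noisy = rng.normal(size=p)                        # noisy target-modality patch
        g = rng.normal(size=p)                              # guidance-modality patch
        y = np.concatenate([x_noisy, g])

        # stage 1: joint sparse coding by a few ISTA iterations (transform + shrinkage)
        alpha = np.zeros(3 * k)
        step = 1.0 / np.linalg.norm(D, 2) ** 2
        for _ in range(100):
            alpha = soft(alpha - step * D.T @ (D @ alpha - y), step * 0.1)

        # stage 2: reconstruct the denoised target patch from its share of the code
        z, u = alpha[:k], alpha[k:2 * k]
        x_denoised = D[:p, :k] @ z + D[:p, k:2 * k] @ u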

    Magnetic Resonance Fingerprinting Using a Residual Convolutional Neural Network

    Conventional dictionary-matching-based MR Fingerprinting (MRF) reconstruction approaches suffer from time-consuming operations that map temporal MRF signals to quantitative tissue parameters. In this paper, we design a 1-D residual convolutional neural network to perform this signature-to-parameter mapping, improving both inference speed and accuracy. In particular, a 1-D convolutional neural network with shortcuts, also known as skip connections, for residual learning is developed using the TensorFlow platform. To avoid the need for a large amount of MRF data, the network is trained on synthetic MRF data simulated with the Bloch equations and fast imaging with steady-state precession (FISP) sequences. The proposed approach was validated on both synthetic data and phantom data generated from a healthy subject. Reconstruction is significantly faster: only 1.6 s are required to reconstruct a pair of T1/T2 maps of size 128 × 128, which is 50× faster than the original dictionary-matching-based method. The improved performance is also confirmed by a higher signal-to-noise ratio (SNR) and a lower root mean square error (RMSE). Furthermore, storing a network is far more compact than storing a large dictionary.
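    A hedged sketch of the kind of 1-D residual CNN described (layer widths, kernel sizes and fingerprint length are assumptions, and PyTorch is used here for brevity, whereas the paper's network was built in TensorFlow): the input is a fingerprint time series and the output is a T1/T2 pair.

        import torch
        import torch.nn as nn

        class ResBlock1D(nn.Module):
            def __init__(self, channels, kernel_size=5):
                super().__init__()
                pad = kernel_size // 2
                self.body = nn.Sequential(
                    nn.Conv1d(channels, channels, kernel_size, padding=pad),
                    nn.ReLU(),
                    nn.Conv1d(channels, channels, kernel_size, padding=pad),
                )
                self.act = nn.ReLU()

            def forward(self, x):
                return self.act(x + self.body(x))        # shortcut (skip) connection

        class MRFNet(nn.Module):
            """Maps an MRF fingerprint (1 x T time series) to a [T1, T2] pair."""
            def __init__(self, n_blocks=4, channels=32):
                super().__init__()
                self.head = nn.Conv1d(1, channels, kernel_size=5, padding=2)
                self.blocks = nn.Sequential(*[ResBlock1D(channels) for _ in range(n_blocks)])
                self.pool = nn.AdaptiveAvgPool1d(1)
                self.out = nn.Linear(channels, 2)         # T1 and T2

            def forward(self, x):
                h = self.blocks(self.head(x))
                return self.out(self.pool(h).squeeze(-1))

        fingerprints = torch.randn(8, 1, 500)             # batch of simulated fingerprints
        print(MRFNet()(fingerprints).shape)               # torch.Size([8, 2])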